-
Note : I'm still studying Odin's implementation of multithreading, so the notes here are basically me organizing the content I found around the source code and core:sync.
-
Ginger Bill:
-
"Odin does have numerous threading and synchronization primitives in its core library. But it does not have any parallelism/concurrency features built directly into the language itself because all of them require some form of automatic memory management which is a no-go."
-
-
"Odin handles threads similarly to how Go handles it".
core:thread
thread.create
-
create. -
Tutorial .
-
Create a thread in a suspended state with the given priority.
-
This procedure creates a thread that will be set to run the procedure specified by the procedure parameter with a specified priority. The returned thread will be in a suspended state until the start() procedure is called.
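-
A minimal sketch of that flow (the worker procedure and its name are mine, not from the docs; it assumes the usual core:thread procedures create, start, join and destroy):
package example

import "core:fmt"
import "core:thread"

worker :: proc(t: ^thread.Thread) {
    fmt.println("running on the worker thread")
}

main :: proc() {
    t := thread.create(worker) // created suspended, not running yet
    defer thread.destroy(t)
    thread.start(t)            // now `worker` starts executing
    thread.join(t)             // wait for it to finish
}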
Thread Pool
-
Via thread.pool
-
Via a dynamic array:
-
Stores pointers to the threads.
-
Tutorial .
package main

import "core:thread"

arr := []int{1, 2, 3}

main :: proc() {
    threadPool := make([dynamic]^thread.Thread, 0, len(arr))
    defer delete(threadPool)
}
-
Channels
core:sync/chan
-
-
The tutorial is useful.
-
-
Tutorial .
-
This package provides both high-level and low-level channel types for thread-safe communication.
-
While channels are essentially thread-safe queues under the hood, their primary purpose is to facilitate safe communication between multiple readers and multiple writers. Although they can be used like queues, channels are designed with synchronization and concurrent messaging patterns in mind.
-
Provided types:
-
Chan - a high-level channel.
-
Raw_Chan - a low-level channel.
-
Raw_Queue - a low-level non-threadsafe queue implementation used internally.
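-
A guessed sketch of the high-level Chan API. I'm assuming chan.create(Chan(T), capacity, allocator) returns the channel plus an allocator error, and that send/recv/close/destroy have the obvious shapes; double-check against the package docs.
package example

import "core:fmt"
import "core:sync/chan"

main :: proc() {
    // Assumed: a capacity argument makes the channel buffered; error ignored for brevity.
    c, _ := chan.create(chan.Chan(int), 4, context.allocator)
    defer chan.destroy(c)

    // A buffered channel accepts sends without a receiver until it is full.
    chan.send(c, 1)
    chan.send(c, 2)

    a, _ := chan.recv(c)
    b, _ := chan.recv(c)
    fmt.println(a, b) // 1 2

    chan.close(c)
}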
-
CPU Yield
-
cpu_relax-
This procedure may lower CPU consumption or yield to a hyperthreaded twin processor.
-
Its exact function is architecture-specific, but the intent is to signal that you're not doing much on the CPU.
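-
As an illustration, a bounded spin-wait might call it on each iteration before falling back to a real blocking primitive (the flag and the loop bound here are made up for the sketch):
package example

import "core:sync"

// Spin briefly on a shared flag, relaxing the CPU on every iteration.
spin_until_ready :: proc(ready: ^bool) -> bool {
    for i := 0; i < 1000; i += 1 {
        if sync.atomic_load(ready) {
            return true
        }
        sync.cpu_relax() // "I'm just spinning"; may yield to an SMT sibling
    }
    // Past this point a real wait (futex, semaphore, ...) would be appropriate.
    return false
}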
-
Synchronization Primitives: Direct Comparisons
Comparing
Sema
vs
Atomic_Sema
-
Sema is just a wrapper around _Sema implementations depending on the OS, but, as there's only one implementation of _Sema in the whole sync library, Sema and Atomic_Sema end up being the same.
-
It's just an edge case for consistency.
-
Blob:
-
Once upon a time there was a Wait Group based Semaphore, which could be switched to with a flag. Ya, I'd imagine it's just kept as is for consistency. #d5886c1
-
Comparing
Mutex
vs
Atomic_Mutex
-
For any other OS:
-
It doesn't matter.
Mutex uses Atomic_Mutex directly. It acts like a direct wrapper.
-
-
For Windows:
-
Mutex wraps a win32 SRWLOCK, while Atomic_Mutex keeps the futex-based implementation (see the struct definitions below).
-
Comparing
RW_Mutex
vs
Atomic_RW_Mutex
-
For any other OS:
-
It doesn't matter.
RW_Mutex uses Atomic_RW_Mutex directly. It acts like a direct wrapper.
-
-
For Windows:
-
RW_Mutex wraps a win32 SRWLOCK, while Atomic_RW_Mutex keeps the futex-based implementation (see the struct definitions below).
-
Comparing
Cond
vs
Atomic_Cond
-
For any other OS:
-
It doesn't matter.
Cond uses Atomic_Cond directly. It acts like a direct wrapper.
-
-
For Windows:
-
Which one to use for Windows?
-
By default lots of implementations of other synchronization primitives use Cond, so I guess I should stay with that one for consistency? I don't know. The implementation from ntdll seems more troublesome than win32, based on what I saw.
-
-
Using Cond:
-
The win32.SleepConditionVariableSRW will be used.
SleepConditionVariableSRW :: proc(ConditionVariable: ^CONDITION_VARIABLE, SRWLock: ^SRWLOCK, dwMilliseconds: DWORD, Flags: LONG) -> BOOL
-
It is a Win32 API function that blocks a thread until a condition variable is signaled, while using an SRW lock as the associated synchronization object.
-
It provides a lightweight, efficient way for threads to wait for a condition to change without spinning. It is the higher-level Win32 analogue to Linux futex-style waits and internally uses the wait-on-address mechanism.
-
ConditionVariable - The condition variable to wait on.
-
SRWLock - A previously acquired SRW lock (in shared or exclusive mode).
-
dwMilliseconds - Timeout in milliseconds, or INFINITE.
-
Flags - CONDITION_VARIABLE_LOCKMODE_SHARED if the lock was acquired in shared mode; 0 if it was acquired in exclusive mode.
-
-
How it works:
-
The caller must already hold the SRW lock.
-
The function atomically unlocks the SRW lock and puts the thread to sleep on the condition variable.
-
When awakened by WakeConditionVariable or WakeAllConditionVariable, it reacquires the SRW lock before returning.
-
The caller must recheck the condition because wake-ups may be spurious.
-
-
-
Using Atomic_Cond:
-
The Futex implementation for Windows will be used instead, which uses atomic_cond_wait -> Ntdll.RtlWaitOnAddress.
-
ntdll.dll is the lowest-level user-mode runtime library in Windows, providing the Native API and the gateway to kernel system calls.
-
The NT system call interface
-
It provides the user-mode entry points for system calls (Nt* and Zw* functions). These functions are thin wrappers that transition into kernel mode.
-
-
The Windows Native API (undocumented or semi-documented)
-
This includes functions prefixed with Rtl*, Ldr*, Nt*, etc. They cover low-level tasks such as process/thread start-up, memory management helpers, loader functionality, string utilities, and synchronization primitives.
-
-
Process bootstrapping code
-
Every user-mode process loads ntdll.dll first. It sets up the runtime before the main module's entry point runs.
-
-
Support for critical subsystems
-
Exception dispatching
-
Thread local storage internals
-
Heap internals (working with the kernel)
-
Loader and module management
-
Atomically waiting/waking primitives (like
RtlWaitOnAddress)
-
-
It is not meant for application-level use. Many of its functions are undocumented, can change between Windows releases, and may break compatibility.
-
It is not the same as kernel32.dll or user32.dll. Those are higher-level and officially documented; they themselves call into ntdll.dll.
-
RtlWaitOnAddress :: proc(Address: rawptr, CompareAddress: rawptr, AddressSize: uint, Timeout: ^i64) -> i32
Rtl (Run-time library) + WaitOnAddress → “run-time library: wait on (a) memory address.”
-
"block the calling thread until the memory at a specified address no longer matches a given value (or a timeout/interrupt occurs)".
-
Atomically compares the bytes at AddressToWaitOn with the bytes pointed to by CompareAddress (size AddressSize).
-
If they are equal, the caller is put to sleep by the kernel until either the memory changes, a timeout/interrupt occurs, or a wake is issued.
-
If they are different on first check, it returns immediately.
-
Ginger Bill:
-
For some bizarre reason, timeout has to be a negative number.
-
WaitOnAddress is implemented on top of RtlWaitOnAddress BUT requires taking the return value of it and, if it is non-zero, converting that status to a DOS error and then SetLastError. If this is not done, then things don't work as expected when an error occurs. GODDAMN MICROSOFT!
-
-
-
Atomics
Memory Order
-
See Multithreading#Atomics .
Implicit Memory Order
-
Non-explicit atomics will always be sequentially consistent (
.Seq_Cst).
Explicit Memory Order
-
In Odin there are 6 different memory ordering guarantees that can be provided to an atomic operation:
Atomic_Memory_Order :: enum {
Relaxed = 0, // Unordered
Consume = 1, // Monotonic
Acquire = 2,
Release = 3,
Acq_Rel = 4,
Seq_Cst = 5,
}
Operations
-
Most of the procedures have a "normal" and an _explicit variant.
-
The "normal" variant will always have a sequentially consistent memory order (.Seq_Cst).
-
The _explicit variant will have the memory order defined by its order parameter (Atomic_Memory_Order), unless specified differently.
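-
A small sketch of why the explicit orders matter: a release store publishing data that an acquire load then consumes (the procedure names are mine; it uses the atomic_* procedures documented below):
package example

import "core:sync"

payload: int
ready:   bool

producer :: proc() {
    payload = 42
    // Release: writes before this store become visible to whoever
    // observes `ready == true` with an acquire load.
    sync.atomic_store_explicit(&ready, true, .Release)
}

consumer :: proc() -> int {
    // Acquire: pairs with the release store above.
    for !sync.atomic_load_explicit(&ready, .Acquire) {
        sync.cpu_relax()
    }
    return payload // guaranteed to read 42
}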
Load / Store
-
atomic_store/atomic_store_explicit-
Atomically store a value into memory.
-
This procedure stores a value to a memory location in such a way that no other thread is able to see partial reads.
-
-
atomic_load/atomic_load_explicit-
Atomically load a value from memory.
-
This procedure loads a value from a memory location in such a way that the received value is not a partial read.
-
-
atomic_exchange/atomic_exchange_explicit-
Atomically exchange the value in a memory location, with the specified value.
-
This procedure loads a value from the specified memory location, and stores the specified value into that memory location. Then the loaded value is returned, all done in a single atomic operation.
-
This operation is an atomic equivalent of the following:
tmp := dst^
dst^ = val
return tmp
-
Compare-Exchange
-
atomic_compare_exchange_strong/atomic_compare_exchange_strong_explicit-
Atomically compare and exchange the value with a memory location.
-
This procedure checks if the value pointed to by the dst parameter is equal to old, and if they are, it stores the value new into the memory location, all done in a single atomic operation. This procedure returns the old value stored in the memory location and a boolean value signifying whether the exchange took place (i.e. whether the value at dst was equal to old).
-
This procedure is an atomic equivalent of the following operation:
old_dst := dst^
if old_dst == old {
    dst^ = new
    return old_dst, true
} else {
    return old_dst, false
}
-
The strong version of compare exchange always returns true when the returned old value stored in the location pointed to by dst and the old parameter are equal.
-
Atomic compare exchange has two memory orderings: One is for the read-modify-write operation, if the comparison succeeds, and the other is for the load operation, if the comparison fails.
-
For the non-explicit version: The memory ordering for both of these operations is sequentially consistent.
-
For the explicit version: The memory ordering for these operations is as specified by the success and failure parameters respectively.
-
-
atomic_compare_exchange_weak/atomic_compare_exchange_weak_explicit-
Atomically compare and exchange the value with a memory location.
-
This procedure checks if the value pointed to by the dst parameter is equal to old, and if they are, it stores the value new into the memory location, all done in a single atomic operation. This procedure returns the old value stored in the memory location and a boolean value signifying whether the exchange took place (i.e. whether the value at dst was equal to old).
-
This procedure is an atomic equivalent of the following operation:
old_dst := dst^
if old_dst == old {
    // may return false here
    dst^ = new
    return old_dst, true
} else {
    return old_dst, false
}
-
The weak version of compare exchange may return false, even if dst^ == old.
-
On some platforms running weak compare exchange in a loop is faster than a strong version.
-
Atomic compare exchange has two memory orderings: One is for the read-modify-write operation, if the comparison succeeds, and the other is for the load operation, if the comparison fails.
-
For the non-explicit version: The memory ordering for both of these operations is sequentially consistent.
-
For the explicit version: The memory ordering for these operations is as specified by the success and failure parameters respectively.
-
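A typical pattern built on top of these is a CAS loop; for example an "atomic maximum" (atomic_max is a made-up helper for illustration, not part of core:sync):
package example

import "core:sync"

// Atomically raise dst^ to `value` if `value` is greater.
atomic_max :: proc(dst: ^int, value: int) {
    old := sync.atomic_load_explicit(dst, .Relaxed)
    for old < value {
        // The weak variant may fail spuriously, but we are looping anyway.
        // On failure it returns the freshly observed value to retry with.
        got, ok := sync.atomic_compare_exchange_weak_explicit(dst, old, value, .Release, .Relaxed)
        if ok {
            return
        }
        old = got
    }
}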
Arithmetic
-
atomic_add/atomic_add_explicit-
Atomically add a value to the value stored in memory.
-
This procedure loads a value from memory, adds the specified value to it, and stores it back as an atomic operation.
-
This operation is an atomic equivalent of the following:
-
dst^ += val
-
-
atomic_sub/atomic_sub_explicit-
Atomically subtract a value from the value stored in memory.
-
This procedure loads a value from memory, subtracts the specified value from it, and stores the result back as an atomic operation.
-
This operation is an atomic equivalent of the following:
-
dst^ -= val
-
-
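For example, a shared counter bumped from several threads loses no increments (a sketch; the counter lives at file scope so the proc literal can reference it):
package example

import "core:fmt"
import "core:sync"
import "core:thread"

counter: int // shared between the threads

main :: proc() {
    threads: [4]^thread.Thread
    for _, i in threads {
        threads[i] = thread.create_and_start(proc(t: ^thread.Thread) {
            for _ in 0..<1000 {
                sync.atomic_add(&counter, 1)
            }
        })
    }
    for t in threads {
        thread.join(t)
    }
    fmt.println(counter) // always 4000
    for t in threads {
        thread.destroy(t)
    }
}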
Logical
-
atomic_and/atomic_and_explicit-
Atomically replace the memory location with the result of AND operation with the specified value.
-
This procedure loads a value from memory, calculates the result of AND operation between the loaded value and the specified value, and stores it back into the same memory location as an atomic operation.
-
This operation is an atomic equivalent of the following:
-
dst^ &= val
-
-
-
atomic_nand/atomic_nand_explicit-
Atomically replace the memory location with the result of NAND operation with the specified value.
-
This procedure loads a value from memory, calculates the result of NAND operation between the loaded value and the specified value, and stores it back into the same memory location as an atomic operation.
-
This operation is an atomic equivalent of the following:
-
dst^ = ~(dst^ & val)
-
-
atomic_or/atomic_or_explicit-
Atomically replace the memory location with the result of OR operation with the specified value.
-
This procedure loads a value from memory, calculates the result of OR operation between the loaded value and the specified value, and stores it back into the same memory location as an atomic operation.
-
This operation is an atomic equivalent of the following:
-
dst^ |= val
-
-
-
-
atomic_xor/atomic_xor_explicit-
Atomically replace the memory location with the result of XOR operation with the specified value.
-
This procedure loads a value from memory, calculates the result of XOR operation between the loaded value and the specified value, and stores it back into the same memory location as an atomic operation.
-
This operation is an atomic equivalent of the following:
-
dst^ ~= val
-
Ordering
-
atomic_thread_fence-
Establish memory ordering.
-
This procedure establishes memory ordering, without an associated atomic operation.
-
-
atomic_signal_fence-
Establish memory ordering between a current thread and a signal handler.
-
This procedure establishes memory ordering between a thread and a signal handler running on the same thread, without an associated atomic operation.
-
This procedure is equivalent to
atomic_thread_fence, except it doesn't issue any CPU instructions for memory ordering.
-
Barrier (
sync.Barrier
)
Cond :: struct {
impl: _Cond,
}
Mutex :: struct {
impl: _Mutex,
}
Barrier :: struct {
mutex: Mutex,
cond: Cond,
index: int,
generation_id: int,
thread_count: int,
}
-
For any other OS:
Futex :: distinct u32
Atomic_Cond :: struct {
state: Futex,
}
_Cond :: struct {
cond: Atomic_Cond,
}
Atomic_Mutex_State :: enum Futex {
Unlocked = 0,
Locked = 1,
Waiting = 2,
}
Atomic_Mutex :: struct {
state: Atomic_Mutex_State,
}
_Mutex :: struct {
mutex: Atomic_Mutex,
}
-
For Windows:
LPVOID :: rawptr
CONDITION_VARIABLE :: struct {
ptr: LPVOID,
}
_Cond :: struct {
cond: win32.CONDITION_VARIABLE,
}
SRWLOCK :: struct {
ptr: LPVOID,
}
_Mutex :: struct {
srwlock: win32.SRWLOCK,
}
-
See Multithreading#Barrier .
Example
// `barrier` needs to be a file-scope (global) variable,
// since Odin's proc literals do not capture locals.
barrier := &sync.Barrier{}

THREAD_COUNT :: 4
threads: [THREAD_COUNT]^thread.Thread

sync.barrier_init(barrier, THREAD_COUNT)

for _, i in threads {
    threads[i] = thread.create_and_start(proc(t: ^thread.Thread) {
        // Same messages will be printed together but without any interleaving
        fmt.println("Getting ready!")
        sync.barrier_wait(barrier)
        fmt.println("Off their marks they go!")
    })
}

for t in threads {
    thread.destroy(t)
}
Usage
-
-
Initializes the barrier for the specified amount of participant threads.
barrier_init :: proc "contextless" (b: ^Barrier, thread_count: int) {
    when ODIN_VALGRIND_SUPPORT {
        vg.helgrind_barrier_resize_pre(b, uint(thread_count))
    }
    b.index = 0
    b.generation_id = 0
    b.thread_count = thread_count
}
-
-
-
Blocks the execution of the current thread, until all threads have reached the same point in the execution of the thread proc.
barrier_wait :: proc "contextless" (b: ^Barrier) -> (is_leader: bool) {
    when ODIN_VALGRIND_SUPPORT {
        vg.helgrind_barrier_wait_pre(b)
    }
    guard(&b.mutex)
    local_gen := b.generation_id
    b.index += 1
    if b.index < b.thread_count {
        for local_gen == b.generation_id && b.index < b.thread_count {
            cond_wait(&b.cond, &b.mutex)
        }
        return false
    }
    b.index = 0
    b.generation_id += 1
    cond_broadcast(&b.cond)
    return true
}
-
Semaphore (
sync.Sema
)
Futex :: distinct u32
Atomic_Sema :: struct {
count: Futex,
}
_Sema :: struct {
atomic: Atomic_Sema,
}
Sema :: struct {
impl: _Sema,
}
-
See Multithreading#Semaphore .
-
Note : A semaphore must not be copied after first use (e.g., after posting to it). This is because, in order to coordinate with other threads, all threads must watch the same memory address to know when the lock has been released. Trying to use a copy of the lock at a different memory address will result in broken and unsafe behavior. For this reason, semaphores are marked as
#no_copy.
Usage
-
I'm not sure how to use this.
-
-
Increment the internal counter on a semaphore by the specified amount.
-
If any of the threads were waiting on the semaphore, up to count threads will continue the execution and enter the critical section.
-
Internally it's just an atomic_add_explicit + futex_signal / futex_broadcast.
atomic_sema_post :: proc "contextless" (s: ^Atomic_Sema, count := 1) {
    atomic_add_explicit(&s.count, Futex(count), .Release)
    if count == 1 {
        futex_signal(&s.count)
    } else {
        futex_broadcast(&s.count)
    }
}

_sema_post :: proc "contextless" (s: ^Sema, count := 1) {
    when ODIN_VALGRIND_SUPPORT {
        vg.helgrind_sem_post_pre(s)
    }
    atomic_sema_post(&s.impl.atomic, count)
}

sema_post :: proc "contextless" (s: ^Sema, count := 1) {
    _sema_post(s, count)
}
-
-
Wait on a semaphore until the internal counter is non-zero.
-
This procedure blocks the execution of the current thread, until the semaphore counter is non-zero, and atomically decrements it by one, once the wait has ended.
-
Internally it's just an atomic_load_explicit + futex_wait + atomic_compare_exchange_strong_explicit.
atomic_sema_wait :: proc "contextless" (s: ^Atomic_Sema) {
    for {
        original_count := atomic_load_explicit(&s.count, .Relaxed)
        for original_count == 0 {
            futex_wait(&s.count, u32(original_count))
            original_count = atomic_load_explicit(&s.count, .Relaxed)
        }
        if original_count == atomic_compare_exchange_strong_explicit(&s.count, original_count, original_count-1, .Acquire, .Acquire) {
            return
        }
    }
}

_sema_wait :: proc "contextless" (s: ^Sema) {
    atomic_sema_wait(&s.impl.atomic)
    when ODIN_VALGRIND_SUPPORT {
        vg.helgrind_sem_wait_post(s)
    }
}

sema_wait :: proc "contextless" (s: ^Sema) {
    _sema_wait(s)
}
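-
Since the note above says the usage isn't clear yet, here is my guessed sketch: one thread posts to signal that some result is ready, another waits for it (the release/acquire pairing inside post/wait is what makes reading the result safe):
package example

import "core:fmt"
import "core:sync"
import "core:thread"

sem:    sync.Sema // internal counter starts at 0
result: int

main :: proc() {
    producer := thread.create_and_start(proc(t: ^thread.Thread) {
        result = 42
        sync.sema_post(&sem) // counter 0 -> 1, wakes one waiter
    })
    defer thread.destroy(producer)

    sync.sema_wait(&sem)  // blocks until the post, then decrements back to 0
    fmt.println(result)   // safe to read
}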
Benaphore (
sync.Benaphore
)
Futex :: distinct u32
Atomic_Sema :: struct {
count: Futex,
}
_Sema :: struct {
atomic: Atomic_Sema,
}
Sema :: struct {
impl: _Sema,
}
Benaphore :: struct {
counter: i32,
sema: Sema,
}
-
See Multithreading#Benaphore .
Usage
-
Seems like a Mutex + Semaphore combined?
-
-
Acquire a lock on a benaphore. If the lock on a benaphore is already held, this procedure also blocks the execution of the current thread, until the lock could be acquired.
-
Once a lock is acquired, all threads attempting to take a lock will be blocked from entering any critical sections associated with the same benaphore, until the lock is released.
benaphore_lock :: proc "contextless" (b: ^Benaphore) {
    if atomic_add_explicit(&b.counter, 1, .Acquire) > 0 {
        sema_wait(&b.sema)
    }
}
-
-
Release a lock on a benaphore. If any of the threads are waiting on the lock, exactly one thread is allowed into a critical section associated with the same benaphore.
benaphore_unlock :: proc "contextless" (b: ^Benaphore) {
    if atomic_sub_explicit(&b.counter, 1, .Release) > 1 {
        sema_post(&b.sema)
    }
}
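-
Usage then looks like a plain mutex: lock, touch the shared state, unlock (a sketch using the two procedures above):
package example

import "core:sync"

b:      sync.Benaphore
shared: int

bump :: proc() {
    sync.benaphore_lock(&b)
    defer sync.benaphore_unlock(&b)
    shared += 1 // critical section
}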
Recursive Benaphore (
sync.Recursive_Benaphore
)
Futex :: distinct u32
Atomic_Sema :: struct {
count: Futex,
}
_Sema :: struct {
atomic: Atomic_Sema,
}
Sema :: struct {
impl: _Sema,
}
Recursive_Benaphore :: struct {
counter: int,
owner: int,
recursion: i32,
sema: Sema,
}
See Multithreading#Recursive Benaphore .
Usage
-
-
Acquire a lock on a recursive benaphore. If the benaphore is held by another thread, this function blocks until the lock can be acquired.
-
Once a lock is acquired, all other threads attempting to acquire a lock will be blocked from entering any critical sections associated with the same recursive benaphore, until the lock is released.
recursive_benaphore_lock :: proc "contextless" (b: ^Recursive_Benaphore) {
    tid := current_thread_id()
    check_owner: if tid != atomic_load_explicit(&b.owner, .Acquire) {
        atomic_add_explicit(&b.counter, 1, .Relaxed)
        if _, ok := atomic_compare_exchange_strong_explicit(&b.owner, 0, tid, .Release, .Relaxed); ok {
            break check_owner
        }
        sema_wait(&b.sema)
        atomic_store_explicit(&b.owner, tid, .Release)
    }
    // inside the lock
    b.recursion += 1
}
-
-
Release a lock on a recursive benaphore. Once the recursion count drops to zero, the critical sections associated with the same benaphore become open for other threads to enter.
recursive_benaphore_unlock :: proc "contextless" (b: ^Recursive_Benaphore) {
    tid := current_thread_id()
    assert_contextless(tid == atomic_load_explicit(&b.owner, .Relaxed), "tid != b.owner")
    b.recursion -= 1
    recursion := b.recursion
    if recursion == 0 {
        if atomic_sub_explicit(&b.counter, 1, .Relaxed) == 1 {
            atomic_store_explicit(&b.owner, 0, .Release)
        } else {
            sema_post(&b.sema)
        }
    }
    // outside the lock
}
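-
A sketch of the point of the recursive variant: the same thread may re-lock it, e.g. when one locked procedure calls another (a non-recursive primitive would deadlock here):
package example

import "core:sync"

rb: sync.Recursive_Benaphore

inner :: proc() {
    sync.recursive_benaphore_lock(&rb) // same owner: recursion count goes to 2
    defer sync.recursive_benaphore_unlock(&rb)
    // critical section
}

outer :: proc() {
    sync.recursive_benaphore_lock(&rb)
    defer sync.recursive_benaphore_unlock(&rb)
    inner()
}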
Auto Reset Event (
sync.Auto_Reset_Event
)
Auto_Reset_Event :: struct {
status: i32,
sema: Sema,
}
Usage
-
Status:
-
status == 0: Event is reset and no threads are waiting.
-
status == 1: Event is signalled.
-
status == -N: Event is reset and N threads are waiting.
-
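How I understand it's meant to be used (a sketch; I'm assuming the procedures are named auto_reset_event_signal and auto_reset_event_wait, check the package to be sure):
package example

import "core:sync"
import "core:thread"

ev: sync.Auto_Reset_Event

main :: proc() {
    t := thread.create_and_start(proc(t: ^thread.Thread) {
        // ... produce something ...
        sync.auto_reset_event_signal(&ev) // status -> 1, or wakes one waiter
    })
    defer thread.destroy(t)

    sync.auto_reset_event_wait(&ev) // consumes the signal; the event resets itself
}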
Mutex (
sync.Mutex
)
Mutex :: struct {
impl: _Mutex,
}
-
For any other OS:
Atomic_Mutex_State :: enum Futex {
Unlocked = 0,
Locked = 1,
Waiting = 2,
}
Atomic_Mutex :: struct {
state: Atomic_Mutex_State,
}
_Mutex :: struct {
mutex: Atomic_Mutex,
}
-
For Windows:
LPVOID :: rawptr
SRWLOCK :: struct {
ptr: LPVOID,
}
_Mutex :: struct {
srwlock: win32.SRWLOCK,
}
-
Note : A Mutex must not be copied after first use (e.g., after locking it the first time). This is because, in order to coordinate with other threads, all threads must watch the same memory address to know when the lock has been released. Trying to use a copy of the lock at a different memory address will result in broken and unsafe behavior. For this reason, Mutexes are marked as
#no_copy. -
Note : If the current thread attempts to lock a mutex it is already holding, that will cause a trivial case of deadlock. Do not use Mutex in recursive functions. In case multiple locks by the same thread are desired, use Recursive_Mutex.
Usage
-
-
Returns true if success, false if failure.
-
-
-
Scoped lock + unlock.
-
-
-
Wait until the condition variable is signalled, releasing the associated mutex while waiting.
-
This procedure releases the lock on the specified mutex and blocks the current thread until the specified condition variable is signalled, or until a spurious wakeup occurs; the mutex is reacquired before the procedure returns.
-
The mutex must be held by the calling thread, before calling the procedure.
-
Note : This procedure can return on a spurious wake-up, even if the condition variable was not signalled by a thread.
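-
A sketch putting the pieces together: plain lock/unlock, the guard helper, and try-lock (I'm assuming the usual names mutex_lock, mutex_unlock, mutex_try_lock and mutex_guard):
package example

import "core:sync"

m:      sync.Mutex
shared: int

update :: proc() {
    sync.mutex_lock(&m)
    defer sync.mutex_unlock(&m)
    shared += 1
}

update_guarded :: proc() {
    // mutex_guard locks here and schedules the matching unlock automatically.
    if sync.mutex_guard(&m) {
        shared += 1
    }
}

try_update :: proc() -> bool {
    if !sync.mutex_try_lock(&m) {
        return false // someone else holds the lock
    }
    defer sync.mutex_unlock(&m)
    shared += 1
    return true
}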
Futex (
sync.Futex
)
Futex :: distinct u32
-
Uses a pointer to a 32-bit value as an identifier of the queue of waiting threads. The value pointed to by that pointer can be used to store extra data.
-
IMPORTANT : A futex must not be copied after first use (e.g., after waiting on it the first time, or signalling it). This is because, in order to coordinate with other threads, all threads must watch the same memory address. Trying to use a copy of the lock at a different memory address will result in broken and unsafe behavior.
Usage
-
The implementations of these functions are heavily OS-dependent.
-
-
Notify one thread.
-
-
-
Notify all threads.
-
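A tiny sketch of the wait/signal pair (the flag encoding is made up; real primitives keep richer state in the futex word):
package example

import "core:sync"

flag: sync.Futex // 0 = not ready, 1 = ready

wait_for_flag :: proc() {
    for sync.atomic_load_explicit(&flag, .Acquire) == 0 {
        // Sleeps only while the futex word still equals 0 (no lost wake-ups).
        sync.futex_wait(&flag, 0)
    }
}

set_flag :: proc() {
    sync.atomic_store_explicit(&flag, 1, .Release)
    sync.futex_signal(&flag) // wake one waiter; futex_broadcast would wake all
}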
One Shot Event (
sync.One_Shot_Event
)
Futex :: distinct u32
One_Shot_Event :: struct {
state: Futex,
}
Usage
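-
My guess at the intended use: a one-time "it happened" notification that stays signalled once fired (assuming the procedures are named one_shot_event_signal and one_shot_event_wait):
package example

import "core:sync"
import "core:thread"

ev: sync.One_Shot_Event

main :: proc() {
    t := thread.create_and_start(proc(t: ^thread.Thread) {
        // ... one-time initialization ...
        sync.one_shot_event_signal(&ev) // fires once, stays signalled
    })
    defer thread.destroy(t)

    sync.one_shot_event_wait(&ev) // returns once the event has fired
}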
Parker (
sync.Parker
)
Futex :: distinct u32
Parker :: struct {
state: Futex,
}
-
See Multithreading#Parker .
Usage
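-
A sketch of the park/unpark pair (assuming the procedures are simply called park and unpark):
package example

import "core:sync"
import "core:thread"

parker: sync.Parker

main :: proc() {
    t := thread.create_and_start(proc(t: ^thread.Thread) {
        // ... do some work ...
        sync.unpark(&parker) // hand the token to the parked thread
    })
    defer thread.destroy(t)

    sync.park(&parker) // blocks until unpark has provided the token
}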
Read-Write Mutex (
sync.RW_Mutex
) / (
sys_windows.SRWLock
)
RW_Mutex :: struct {
impl: _RW_Mutex,
}
-
For any other OS:
Futex :: distinct u32
Atomic_RW_Mutex_State :: distinct uint
Atomic_Mutex_State :: enum Futex {
Unlocked = 0,
Locked = 1,
Waiting = 2,
}
Atomic_Mutex :: struct {
state: Atomic_Mutex_State,
}
Atomic_Sema :: struct {
count: Futex,
}
Atomic_RW_Mutex :: struct {
state: Atomic_RW_Mutex_State,
mutex: Atomic_Mutex,
sema: Atomic_Sema,
}
_RW_Mutex :: struct {
mutex: Atomic_RW_Mutex,
}
-
For Windows:
LPVOID :: rawptr
SRWLOCK :: struct {
ptr: LPVOID,
}
_RW_Mutex :: struct {
srwlock: win32.SRWLOCK,
// The same as _Mutex for Windows.
}
-
Note : A read-write mutex must not be copied after first use (e.g., after acquiring a lock). This is because, in order to coordinate with other threads, all threads must watch the same memory address to know when the lock has been released. Trying to use a copy of the lock at a different memory address will result in broken and unsafe behavior. For this reason, mutexes are marked as
#no_copy. -
Note : A read-write mutex is not recursive. Do not attempt to acquire an exclusive lock more than once from the same thread, or an exclusive and shared lock on the same thread. Taking a shared lock multiple times is acceptable.
Usage
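-
A sketch of exclusive vs shared locking (assuming the usual names rw_mutex_lock/unlock and rw_mutex_shared_lock/shared_unlock):
package example

import "core:sync"

rw:    sync.RW_Mutex
table: map[string]int

write_entry :: proc(key: string, value: int) {
    sync.rw_mutex_lock(&rw) // exclusive: no readers or writers inside
    defer sync.rw_mutex_unlock(&rw)
    table[key] = value
}

read_entry :: proc(key: string) -> (value: int, ok: bool) {
    sync.rw_mutex_shared_lock(&rw) // shared: many readers at once
    defer sync.rw_mutex_shared_unlock(&rw)
    value, ok = table[key]
    return
}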
Once (
sync.Once
)
Once :: struct {
m: Mutex,
done: bool,
}
-
See Multithreading#Once .
Usage
-
-
once_do_without_data :: proc(o: ^Once, fn: proc()) {
    @(cold)
    do_slow :: proc(o: ^Once, fn: proc()) {
        guard(&o.m)
        if !o.done {
            fn()
            atomic_store_explicit(&o.done, true, .Release)
        }
    }

    if atomic_load_explicit(&o.done, .Acquire) == false {
        do_slow(o, fn)
    }
}
-
once_do_with_data :: proc(o: ^Once, fn: proc(data: rawptr), data: rawptr) {
    @(cold)
    do_slow :: proc(o: ^Once, fn: proc(data: rawptr), data: rawptr) {
        guard(&o.m)
        if !o.done {
            fn(data)
            atomic_store_explicit(&o.done, true, .Release)
        }
    }

    if atomic_load_explicit(&o.done, .Acquire) == false {
        do_slow(o, fn, data)
    }
}
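-
A sketch of calling it (once_do_without_data is the variant shown above; I believe there is also a once_do proc group wrapping both variants):
package example

import "core:fmt"
import "core:sync"

initialized: sync.Once

init_subsystem :: proc() {
    // The body runs exactly once, no matter how many threads call init_subsystem.
    sync.once_do_without_data(&initialized, proc() {
        fmt.println("initializing (runs exactly once)")
    })
}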
-
Ticket Mutex (
sync.Ticket_Mutex
)
Ticket_Mutex :: struct {
ticket: uint,
serving: uint,
}
Usage
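-
The idea is the standard ticket-lock scheme: lock takes a ticket (fetch-add on ticket) and spins until serving catches up; unlock increments serving, so threads enter in FIFO order. A usage sketch (assuming ticket_mutex_lock/unlock):
package example

import "core:sync"

tm:     sync.Ticket_Mutex
shared: int

update :: proc() {
    sync.ticket_mutex_lock(&tm)         // take a ticket, wait until it is served
    defer sync.ticket_mutex_unlock(&tm) // serve the next ticket (FIFO fairness)
    shared += 1
}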
Condition Variable (
sync.Cond
)
Cond :: struct {
impl: _Cond,
}
-
For any other OS:
Futex :: distinct u32
Atomic_Cond :: struct {
state: Futex,
}
_Cond :: struct {
cond: Atomic_Cond,
}
-
For Windows:
LPVOID :: rawptr
CONDITION_VARIABLE :: struct {
ptr: LPVOID,
}
_Cond :: struct {
cond: win32.CONDITION_VARIABLE,
}
-
Note : A condition variable must not be copied after first use (e.g., after waiting on it the first time). This is because, in order to coordinate with other threads, all threads must watch the same memory address to know when the lock has been released. Trying to use a copy of the lock at a different memory address will result in broken and unsafe behavior. For this reason, condition variables are marked as
#no_copy.
Usage
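-
The classic pattern: guard a predicate with the mutex, wait in a loop (wake-ups may be spurious), and signal after changing the predicate. A sketch:
package example

import "core:sync"

m:     sync.Mutex
c:     sync.Cond
ready: bool

wait_until_ready :: proc() {
    sync.mutex_lock(&m)
    defer sync.mutex_unlock(&m)
    for !ready { // re-check: the wake-up may be spurious
        sync.cond_wait(&c, &m) // releases m while sleeping, reacquires before returning
    }
}

mark_ready :: proc() {
    sync.mutex_lock(&m)
    ready = true
    sync.mutex_unlock(&m)
    sync.cond_signal(&c) // or cond_broadcast to wake every waiter
}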
Wait Group (
sync.Wait_Group
)
Wait_Group :: struct {
counter: int,
mutex: Mutex,
cond: Cond,
}
-
Note : Just like the other synchronization primitives, a wait group must not be copied after first use.
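-
A sketch of the usual add / done / wait flow (assuming the procedures are named wait_group_add, wait_group_done and wait_group_wait):
package example

import "core:sync"
import "core:thread"

wg: sync.Wait_Group

main :: proc() {
    threads: [4]^thread.Thread
    sync.wait_group_add(&wg, len(threads))

    for _, i in threads {
        threads[i] = thread.create_and_start(proc(t: ^thread.Thread) {
            // ... one unit of work ...
            sync.wait_group_done(&wg) // counter -= 1
        })
    }

    sync.wait_group_wait(&wg) // blocks until the counter reaches zero

    for t in threads {
        thread.destroy(t)
    }
}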